Towards rigorous understanding of neural networks via semantics-preserving transformations
Authors
Abstract
In this paper, we present an algebraic approach to the precise and global verification and explanation of Rectifier Neural Networks, a subclass of Piece-wise Linear Neural Networks (PLNNs), i.e., networks that semantically represent piece-wise affine functions. Key to our approach is the symbolic execution of these networks, which allows the construction of semantically equivalent Typed Affine Decision Structures (TADS). Due to their deterministic and sequential nature, TADS can, similarly to decision trees, be considered as white-box models and therefore as solutions to the model outcome explanation problem. TADS form linear algebras, with which one can elegantly compare networks for equivalence or similarity, both with diagnostic information in case of failure, and characterize their classification potential by precisely characterizing the set of inputs that are specifically classified, or where two network-based classifiers differ. All phenomena are illustrated along a detailed discussion of a minimal, illustrative example: the continuous XOR function.
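The abstract's core observation is that a ReLU network computes a piece-wise affine function: once each ReLU unit's on/off status is fixed, the whole network collapses to a single affine map, and symbolically enumerating these activation patterns yields the affine leaves of a decision structure. A minimal sketch of this idea (not the authors' implementation; the 2-2-1 network below is a hypothetical example chosen to realize the continuous XOR function):

```python
# Sketch: enumerating the affine pieces of a tiny ReLU network by fixing
# activation patterns -- the core idea behind building a Typed Affine
# Decision Structure (TADS). Assumed example network, not from the paper.
import itertools
import numpy as np

# Hypothetical 2-2-1 ReLU network computing XOR on {0,1}^2.
W1 = np.array([[1.0, 1.0], [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, -2.0]])
b2 = np.array([0.0])

def affine_piece(pattern):
    """Affine function (A, c) the network computes when its ReLU units are
    fixed to the given on/off pattern (1 = active, 0 = clamped to zero)."""
    D = np.diag(pattern)      # a fixed ReLU pattern is a 0/1 diagonal matrix
    A = W2 @ D @ W1           # effective linear part
    c = W2 @ D @ b1 + b2      # effective offset
    return A, c

# Symbolic execution explores the activation patterns; each reachable
# pattern contributes one affine leaf of the decision structure.
for pattern in itertools.product([0.0, 1.0], repeat=2):
    A, c = affine_piece(np.array(pattern))
    print(pattern, A, c)
```

For instance, in the region where both hidden units are active, the sketch reports the affine map x ↦ -x₁ - x₂ + 2, which agrees with directly evaluating the network there (e.g., it maps (1, 1) to 0).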
Similar resources
Verifying Parameterized Recursive Circuits Using Semantics-preserving Transformations of Nets
This report presents a theorem-proving method for verifying parameterized recursive circuits. The method is based on semantics-preserving transformations of nets (SPTNs). Its theoretical bases are an algebraic calculus introduced by G. Hotz in 1965 and the hardware description language 2dL based on this calculus. The starting point of our method is a high-level description using bi-category ...
Understanding Neural Networks via Rule Extraction
Although backpropagation neural networks generally predict better than decision trees do for pattern classification problems, they are often regarded as black boxes, i.e., their predictions are not as interpretable as those of decision trees. This paper argues that this is because there has been no proper technique that enables us to do so. With an algorithm that can extract rules by drawing paral...
Parameterized Approximation via Fidelity Preserving Transformations
We motivate and describe a new parameterized approximation paradigm which studies the interaction between performance ratio and running time for any parameterization of a given optimization problem. As a key tool, we introduce the concept of α-shrinking transformation, for α ≥ 1. Applying such a transformation to a parameterized problem instance decreases the parameter value, while preserving appr...
Efficient Synthesis for Concurrency by Semantics-Preserving Transformations
We develop program synthesis techniques that can help programmers fix concurrency-related bugs. We make two new contributions to synthesis for concurrency, the first improving the efficiency of the synthesized code, and the second improving the efficiency of the synthesis procedure itself. The first contribution is to have the synthesis procedure explore a variety of (sequential) semantics-pres...
Towards an Understanding of Neural Networks in Natural-Image Spaces
Two major uncertainties, dataset bias and perturbation, prevail in state-of-the-art AI algorithms with deep neural networks. In this paper, we present an intuitive explanation for these issues as well as an interpretation of the performance of deep networks in a natural-image space. The explanation consists of two parts: the philosophy of neural networks and a hypothetical model of natural-image s...
Journal
Journal title: International Journal on Software Tools for Technology Transfer
Year: 2023
ISSN: 1433-2779, 1433-2787
DOI: https://doi.org/10.1007/s10009-023-00700-7